Minimax Multi-Task Learning

Authors

  • Nishant A. Mehta
  • Dongryeol Lee
  • Alexander G. Gray
Abstract

A multi-task learning (MTL) algorithm learns an inductive bias to learn several tasks together. MTL is incredibly pervasive in machine learning: it has natural connections to random effects models [5]; user preference prediction (including collaborative filtering) can be framed as MTL [6]; multi-class classification admits the popular one-vs-all and all-pairs MTL reductions; and MTL admits provably good learning in settings where single-task learning is hopeless [3, 4]. But if we see a random set of tasks today, which of the tasks will matter tomorrow? Not knowing the challenges nature will pose in the future, it is wise to mitigate the worst case by ensuring a minimum proficiency on each task.
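To make the worst-case idea concrete, here is a rough sketch (invented for this page, not the paper's algorithm) that fits one shared linear predictor to three synthetic regression tasks in two ways: gradient descent on the mean of the per-task empirical risks, and subgradient descent on their maximum, which repeatedly updates against the currently worst task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three synthetic regression tasks whose true weight vectors conflict,
# so a single shared predictor must trade the tasks off.
T, n, d = 3, 50, 5
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
base = rng.normal(size=d)
w_stars = [base + sigma * rng.normal(size=d) for sigma in (0.1, 0.1, 1.5)]
ys = [X @ w_s + 0.1 * rng.normal(size=n) for X, w_s in zip(Xs, w_stars)]

def task_risks(w):
    """Per-task empirical risks (mean squared error)."""
    return np.array([np.mean((X @ w - y) ** 2) for X, y in zip(Xs, ys)])

def task_grad(w, t):
    """Gradient of task t's empirical risk with respect to w."""
    X, y = Xs[t], ys[t]
    return (2.0 / n) * X.T @ (X @ w - y)

w_mean, w_minimax, lr = np.zeros(d), np.zeros(d), 0.05
for _ in range(500):
    # Mean MTL: descend the average of the task gradients.
    w_mean -= lr * np.mean([task_grad(w_mean, t) for t in range(T)], axis=0)
    # Minimax MTL: a subgradient of max_t risk_t is the gradient of
    # whichever task currently has the largest empirical risk.
    worst = int(np.argmax(task_risks(w_minimax)))
    w_minimax -= lr * task_grad(w_minimax, worst)

print("mean-MTL risks:   ", np.round(task_risks(w_mean), 3))
print("minimax-MTL risks:", np.round(task_risks(w_minimax), 3))
```

On runs like this, the mean objective typically tolerates a large risk on the odd task out, while the minimax objective pulls that worst risk down at some cost to the others.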


Similar articles

Minimax Multi-Task Learning and a Generalized Loss-Compositional Paradigm for MTL

Since its inception, the modus operandi of multi-task learning (MTL) has been to minimize the task-wise mean of the empirical risks. We introduce a generalized loss-compositional paradigm for MTL that includes a spectrum of formulations as a subfamily. One endpoint of this spectrum is minimax MTL: a new MTL formulation that minimizes the maximum of the tasks’ empirical risks. Via a certain rela...
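One natural way to write such a spectrum, as a sketch in assumed notation with $\hat{\mathcal{R}}_t(f)$ denoting task $t$'s empirical risk over $T$ tasks, is an $\ell_p$ composition of the task risks:

\[
\min_{f} \; \Big( \sum_{t=1}^{T} \hat{\mathcal{R}}_t(f)^{\,p} \Big)^{1/p}, \qquad p \in [1, \infty],
\]

where $p = 1$ recovers the task-wise mean up to a factor of $T$, and $p \to \infty$ yields minimax MTL, $\min_f \max_t \hat{\mathcal{R}}_t(f)$.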


Revisiting Stein's paradox: multi-task averaging

We present a multi-task learning approach to jointly estimate the means of multiple independent distributions from samples. The proposed multi-task averaging (MTA) algorithm results in a convex combination of the individual tasks’ sample averages. We derive the optimal amount of regularization for the two-task case for the minimum risk estimator and a minimax estimator, and show that the optima...
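A minimal sketch of the convex-combination idea: each task's sample mean is shrunk toward the pooled mean. The shrinkage weight `alpha` below is hand-picked for illustration, not the optimal amount of regularization the paper derives.

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from T independent distributions with different means.
true_means = np.array([0.0, 0.5, 1.0, 1.5])
samples = [rng.normal(mu, 1.0, size=20) for mu in true_means]

sample_means = np.array([s.mean() for s in samples])
pooled_mean = sample_means.mean()

# Multi-task averaging as a convex combination of the single-task
# averages: shrink each sample mean toward the pooled mean.
# alpha is an illustrative choice, not the paper's derived optimum.
alpha = 0.3
mta_means = (1 - alpha) * sample_means + alpha * pooled_mean

print("single-task MSE:", np.mean((sample_means - true_means) ** 2))
print("MTA         MSE:", np.mean((mta_means - true_means) ** 2))
```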


Convergence rate of Bayesian tensor estimator and its minimax optimality

We investigate the statistical convergence rate of a Bayesian low-rank tensor estimator, and derive the minimax optimal rate for learning a low-rank tensor. Our problem setting is the regression problem where the regression coefficient forms a tensor structure. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatiotemporal dat...
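For orientation, the problem setting described here can be sketched in assumed notation as linear regression with a tensor-valued coefficient:

\[
y_i = \langle \mathcal{A}^{*}, \mathcal{X}_i \rangle + \epsilon_i, \qquad i = 1, \dots, n,
\]

where the covariates $\mathcal{X}_i$ and the coefficient $\mathcal{A}^{*} \in \mathbb{R}^{d_1 \times \cdots \times d_K}$ are order-$K$ tensors, $\langle \cdot, \cdot \rangle$ is the entrywise inner product, $\epsilon_i$ is noise, and $\mathcal{A}^{*}$ is assumed to have low rank.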


Gaussian process nonparametric tensor estimator and its minimax optimality

We investigate the statistical efficiency of a nonparametric Gaussian process method for a nonlinear tensor estimation problem. Low-rank tensor estimation has been used as a method to learn higher order relations among several data sources in a wide range of applications, such as multitask learning, recommendation systems, and spatiotemporal analysis. We consider a general setting where a commo...


Manifold regularization based on Nyström type subsampling

In this paper, we study Nyström type subsampling for large scale kernel methods to reduce the computational complexities of big data. We discuss a multi-penalty regularization scheme based on Nyström type subsampling, which is motivated by well-studied manifold regularization schemes. We develop a theoretical analysis of the multi-penalty least-square regularization scheme under the general ...
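As background, here is a minimal sketch of plain Nyström subsampling of a kernel matrix: pick m landmark points and approximate the full n x n kernel matrix from the kernel evaluated against the landmarks. This is a generic illustration, not the multi-penalty scheme analyzed in the paper; the RBF kernel and landmark count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Nyström approximation: K ~ K_nm K_mm^+ K_mn using m << n landmarks.
n, m, d = 500, 40, 3
X = rng.normal(size=(n, d))
landmarks = X[rng.choice(n, size=m, replace=False)]

K_nm = rbf_kernel(X, landmarks)           # n x m
K_mm = rbf_kernel(landmarks, landmarks)   # m x m
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

K_full = rbf_kernel(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(f"relative Frobenius error of the Nyström approximation: {err:.3f}")
```

Downstream kernel methods then work with the m x m and n x m factors instead of the full n x n matrix, which is where the computational savings come from.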



Publication year: 2012